Victory in the Vetter v. Resnik Lawsuit: Artist Rights, Songwriter Advocacy, and the Power of Termination

At the center of Vetter v. Resnik are songwriters reclaiming what Congress promised them in a detailed and lengthy legislative negotiation over the 1976 revision to the Copyright Act—a meaningful second chance to terminate what the courts call “unremunerative transfers,” aka crappy deals. That principle comes into sharp focus through Cyril Vetter, whose perseverance brought this case to the Fifth Circuit, and Cyril’s attorney Tim Kappel, whose decades-long advocacy for songwriter rights helped frame the issues not as abstractions, but as lived realities.

Cyril won his case against his publisher at trial in a landmark ruling by Chief Judge Shelly Dick. His publisher appealed Judge Dick’s ruling to the Fifth Circuit. As readers will remember, oral arguments in the case were held earlier this year. A number of songwriter and author groups, including the Artist Rights Institute, filed “friend of the court” briefs in support of Cyril.

In a unanimous opinion, the United States Court of Appeals for the Fifth Circuit affirmed Judge Shelly Dick’s carefully reasoned trial-court ruling, holding that when an author terminates a worldwide grant, the recapture is also worldwide. It is not artificially limited to U.S. territory, as had been the industry practice. The court understood that anything less would hollow out Congress’s intent.

It is often said that the whole point of the termination law is to give authors (including songwriters) a “second bite at the apple,” which is why the Artist Rights Institute wrote (and was quoted by the Fifth Circuit) that limiting the reversion to U.S. rights only is a “second bite at half the apple,” the opposite of Congressional intent.

[Download: 25-30108-2026-01-12 (the Fifth Circuit opinion)]



What made this Fifth Circuit decision especially meaningful for the creative community is that the court did not reach it in a vacuum. Writing for the panel, Judge Carl Stewart expressly quoted the Artist Rights Institute amicus brief, observing:

“Denying terminating authors the full return of a worldwide grant leaves them with only half of the apple—the opposite of congressional intent.”

That sentence—simple, vivid, and unmistakably human—captured what this case has always been about.

[Download: ARI.Amicus.Vetter.Final (the Artist Rights Institute amicus brief)]



The Artist Rights Institute’s amicus brief did not appear overnight. It grew out of a longstanding relationship between songwriter advocate Tim Kappel and Chris Castle, a collaboration shaped over many years by shared concern for how statutory rights actually function—or fail to function—for creators in the real world.

When the Vetter appeal crystallized the stakes, that history mattered. It allowed ARI to move quickly, confidently, and with credibility—translating dense statutory language into a narrative to help courts understand that termination rights are supposed to restore leverage, not preserve a publisher’s foreign control veto through technicalities.

Crucially, the brief was inspired and strengthened by the voices of songwriter advocates and heirs, including Abby North (heir of composer Alex North), Blake Morgan (godson of songwriter Lesley Gore), and Angela Rose White (heir of legendary music director David Rose), and of course David Lowery and Nikki Rowling. The involvement of these advocates and heirs ensured the court understood the context: termination is not merely about renegotiating deals for living authors. It is often about families, estates, and heirs—people for whom Congress explicitly preserved termination rights as a matter of intergenerational fairness.

The Fifth Circuit’s opinion reflects that understanding. By rejecting a cramped territorial reading of termination, the court avoided a result that would have undermined heirs’ rights just as surely as authors’ rights.

Vetter v. Resnik represents a rare and welcome alignment: an author willing to press his statutory rights all the way, advocates who understood the lived experience behind those rights, a district judge who took Congress at its word, and an appellate court willing to say plainly that “half of the apple” is not enough.

For the Artist Rights Institute, it was an honor to participate—to stand alongside Cyril Vetter, Tim Kappel, and the community of songwriter advocates and heirs whose experiences shaped a brief that helped the court see the full picture.

And for artists, songwriters, and their families, the decision stands as a reminder that termination rights mean what Congress said they mean—a real chance to reclaim ownership, not an illusion bounded by geography.

@ArtistRights Institute Newsletter 01/05/26: Grok Can’t Control Itself, CRB V Starts, Data Center Rebellion, Sarah Wynn-Williams Senate Testimony, Copyright Review


Phonorecords V Commencement Notice: Government setting song mechanical royalty rates

The Copyright Royalty Judges announce the commencement of a proceeding to determine reasonable rates and terms for making and distributing phonorecords for the period beginning January 1, 2028, and ending December 31, 2032. Parties wishing to participate in the rate determination proceeding must file their Petition to Participate and the accompanying $150 filing fee no later than 11:59 p.m. eastern time on January 30, 2026. Deets here.

US Mechanical Rate Increase

Songwriters Will Get Paid More for Streaming Royalties Starting Today (Erinn Callahan/AmericanSongwriter)

CRB Sets 2026 Mechanical Rate at 13.1¢ (Chris Castle/MusicTechPolicy)

Spotify’s Hack by Anna’s Archive

No news. Biggest music hack in history still stolen.

MLC Redesignation

The MMA’s Unconstitutional Unclaimed Property Preemption: How Congress Handed Protections to Privatize Escheatment (Chris Castle/MusicTechPolicy)

Under the Radar: Data Center Grass Roots Rebellion

Data Center Rebellion (Chris Castle/MusicTechSolutions)

The Data Center Rebellion is Here and It’s Reshaping the Political Landscape (Washington Post)

Residents protest high-voltage power lines that could skirt Dinosaur Valley State Park (Alejandra Martinez and Paul Cobler/Texas Tribune)

US Communities Halt $64B Data Center Expansions Amid Backlash (Lucas Greene/WebProNews)

Big Tech’s fast-expanding plans for data centers are running into stiff community opposition (Marc Levy/Associated Press)

Data center ‘gold rush’ pits local officials’ hunt for new revenue against residents’ concerns (Alander Rocha/Georgia Record)

AI Policy

Meet the New AI Boss, Worse Than the Old Internet Boss (Chris Castle/MusicTechPolicy)

Deloitte’s AI Nightmare: Top Global Firm Caught Using AI-Fabricated Sources to Support its Policy Recommendations (Hugh Stephens/Hugh Stephens Blog)

Grok Can’t Stop AI Exploitation of Women

Facebook/Meta Whistleblower Testifies at US Senate

Copyright Case 2025 Review

Year in Review: The U.S. Copyright Office (George Thuronyi/Library of Congress)

Copyright Cases: 2025 Year in Review (Rachel Kim/Copyright Alliance)

AI copyright battles enter pivotal year as US courts weigh fair use (Blake Brittain/Reuters)

2026 Music Predictions: The Legal and Policy Fault Lines Ahead

By Chris Castle

I was grateful to Hypebot for publishing my 2026 music‑industry predictions, which focused on the legal and structural pressures already reshaping the business. For regular readers, I’m reposting those predictions here—and adding a few more that follow directly from the policy work, regulatory engagement, and royalty‑system scrutiny we’ve been immersed in over the past year with the Artist Rights Institute. These additional observations are less about trend‑spotting and more about where the underlying legal and institutional logic appears to be heading next.

1. AI Copyright Litigation Will Move From Abstract Theory to Operational Discovery

In 2026, the center of gravity in AI‑copyright cases will shift toward discovery that exposes how models are trained, weighted, filtered, and monetized. Courts will increasingly treat AI systems as commercial products rather than research experiments run for the good of humanity…ahem…and the action will move to discovery fights rather than summary judgment rhetoric. The result will be pressure on platforms to settle, license, or restructure before full disclosure occurs, particularly since it’s becoming increasingly likely that every frontier AI lab has ripped off the world’s culture the old-fashioned way: they stole it off the Internet.

The next round of AI copyright litigation will come from fans: as more deals are done with AI companies, like the Disney/Sora deal, fans who use Sora or other AI tools to create separable rights (like new characters and new storylines) or even new universes with old storylines (like maybe a new version of the Luke/Darth/Han/Leia arc in the Old West) will start to get the idea that their IP is…well…their IP. If it’s used without compensating them or getting their permission, that whole copyright thing is going to start to get real for them.

2. Streaming Platforms Will Face Structural Payola Scrutiny, Not Just Royalty Complaints

Minimum‑payment thresholds, bundled offerings, and “greater‑of” formulas will no longer be treated as isolated business choices. Regulators and courts will begin to examine how these mechanisms function together to shift risk onto artists while preserving platform margins. Antitrust, consumer‑protection, and unfair‑competition theories will increasingly converge around the same conduct. Given Spotify’s market dominance and its intimidation factor for majors and big-to-medium-sized independent labels, these cases will have to come from independent artists.

3. The Copyright Office Will Approve a Conditional Redesignation of the MLC

Rather than granting an unconditional redesignation of the Mechanical Licensing Collective, the Copyright Office is likely to impose conditions tied to governance, transparency, and financial stewardship. This approach allows continuity for licensees while asserting supervisory authority grounded in the statute. The message will be clear: designation is provisional, not permanent.

[Download: Digital-Licensing-Coordinator-to-USCO-2-Sept-22-2025]

4. The MLC’s Gundecked Investment Policy Will Be Unwound or Materially Rewritten

The practice of investing unmatched royalties as a pooled asset is becoming legally and politically indefensible. In 2026, expect the investment policy to be unwound or rewritten by new regulations requiring pass‑through of gains or strict capital‑preservation limits. Once framed as a fiduciary issue rather than a finance strategy, the current model cannot survive intact.

It’s also worth noting that the MLC’s investment portfolio has grown so large ($1.212 billion) that the investment income reported on its 2023 tax return now exceeds its operating costs as measured by the administrative assessment paid by licensees.

5. An MLC Independent Royalty‑Accounting and Systems Review Will Become Inevitable

As part of a conditional redesignation, the Copyright Office may require an end‑to‑end operational review of the MLC by a top‑tier royalty‑accounting firm. Unlike a SOC report, such a review would examine whether matching, data logic, and distributions actually produce correct outcomes. Once completed, that analysis would shape litigation, policy reform, and future oversight.

6. Foreign CMOs Will Push Toward Licensee‑Pays Models

Outside the U.S., collective management organizations face rising technology costs and political scrutiny over compensation. In response, many will explore shifting more costs to licensees rather than members, reframing CMOs as infrastructure providers. Ironically, the U.S. MLC experiment may accelerate this trend abroad given the MLC’s rich salaries and vast resources for developing poorly implemented tech.

These developments are not speculative in the abstract. They follow from incentives already in motion, records already being built, and institutions increasingly unable to rely on deference alone.

7. Environmental Harms of AI Become a Core Climate Issue

We will start to see the AI labs normalize the concept of private energy generation on a massive scale to support data centers built in current green spaces. If they build or buy electric plants, they do not intend to share them. This whole idea that they will build small nuclear reactors and sell the excess back to the local grid is crazy: there won’t be any excess, and what about their behavior over the last 25 years makes you think they’ll share a thing?

So sometime after Los Angeles rezones Griffith Park for commercial use and sells the Greek Theater to Google for a new data center and private nuclear reactor, and Facebook buys the Diablo Canyon reactor, the Music Industry Climate Collective will formally integrate AI’s ecological footprint into their national and international policy agendas. After mounting evidence of data‑center water depletion, aquifer stress, and grid destabilization — particularly in drought‑prone regions — climate coalitions will conceptually reclassify AI infrastructure as a high‑impact industrial activity.

This will become acute after people realize they cannot expect the state or federal governments to require new state permitting regimes because of the overwhelming political influence of Big Tech in the form of AI Viceroy-for-Life David Sacks (he’s not going anywhere in a post-Trump era). This will lead to environmental‑justice litigation over siting decisions and pressure to require reporting of AI‑related energy, water, and land use.

8. Criminal RICO Case Against StubHub and Affiliated Resale Networks

By late 2026, the Department of Justice brings a landmark criminal RICO indictment targeting StubHub‑linked reseller networks and individual reseller financiers for systemic ticketing fraud and money laundering. The enterprise theory alleges that major resellers, platform intermediaries, lenders, and bot‑operators coordinated to engage in wire fraud, market manipulation, speculative ticketing, and deceptive consumer practices at international scale. Prosecutors present evidence of an organized structure that used bots, fabricated scarcity, misrepresentation of seat availability, and price‑fixing algorithms to inflate profits.

This becomes the first major criminal RICO prosecution in the secondary‑ticketing economy and triggers parallel state‑level investigations and civil RICO suits. Public resellers like StubHub will face shareholder lawsuits and securities fraud allegations.

Just another bright sunshiny day.

[A version of this post first appeared on MusicTechPolicy]




What Don Draper Knew That AI Forgot: Authorship, Ownership, and Advertising

David is pointing to a quiet but serious problem hiding behind the rush to use generative AI in advertising, film, and television: copyright law protects authorship, not outputs. AI muddies, or in some cases erases, authorship altogether.

Under current U.S. Copyright Office guidance, works generated primarily by AI are often not registrable because they lack a human author exercising creative control. That means a brand that relies on AI to generate a commercial may not actually own exclusive rights in the finished work. If someone copies, remixes, or repurposes that ad, even in a way that damages the brand, the company may have little or no legal recourse under copyright law.

The Copyright Office guidance says:

In the Office’s view, it is well-established that copyright can protect only material that is the product of human creativity. Most fundamentally, the term “author,” which is used in both the Constitution and the Copyright Act, excludes non-humans. The Office’s registration policies and regulations reflect statutory and judicial guidance on this issue…. If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it. For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user.

David has not identified a theoretical risk. Copyright is the backbone of brand control in media. It’s what allows companies to stop misuse, dilution, parody-turned-weapon, or hostile appropriation. In the US, a copyright registration is required before a company can sue to enforce those rights. Remove that protection, and brands are left relying on weaker tools like trademark or unfair competition law, which are narrower, slower, and often ill-suited to digital remix culture.

David’s warning extends beyond ads. Film and TV studios experimenting with AI-generated scripts, scenes, music, or visuals may be undermining their own ability to control, license, or defend those works. In trying to save money upfront, they may be giving up the legal leverage that protects their brand, reputation, and long-term value.

Meet the New AI Boss, Worse Than the Old Internet Boss

Congress is considering several legislative packages to regulate AI. AI is a system that was launched globally with no safety standards, no threat modeling, and no real oversight. A system that externalized risk onto the public, created enormous security vulnerabilities, and then acted surprised when criminals, hostile states, and bad actors exploited it.

After the damage was done, the same companies that built it told governments not to regulate—because regulation would “stifle innovation.” Instead, they sold us cybersecurity products, compliance frameworks, and risk-management services to fix the problems they created.

Yes, artificial intelligence is a problem. Wait…oh, no, sorry. That’s not AI.

That was the Internet. And it made the tech bros the richest ruling class in history.

And that’s why some of us are just a little skeptical when the same tech bros are now telling us: “Trust us, this time will be different.” AI will be different, that’s for sure. They’ll get even richer and they’ll rip us off even more this time. Not to mention building small nuclear reactors on government land that we paid for, monopolizing electrical grids that we paid for, and expecting us to fill the landscape with massive power lines that we will pay for.

The topper is that these libertines want no responsibility for anything, and they want to seize control of the levers of government to stop any accountability. But there are some in Congress who are serious about not getting fooled again.

Senator Marsha Blackburn released a summary of legislation she is sponsoring that gives us some cause for hope (read it here courtesy of our friends at the Copyright Alliance). Because her bill might be effective, that means Silicon Valley shills will be all over it to try to water it down and, if at all possible, destroy it. That attack of the shills has already started with Silicon Valley’s AI Viceroy in the Trump White House, a guy you may never have heard of named David Sacks. Know that name. Beware that name.

Senator Blackburn’s bill will do a lot of good things, including protecting copyright. But the first substantive section of her summary is a game changer. She would establish an obligation on AI platforms to be responsible for known or predictable harm that can befall users of AI products. This is sometimes called a “duty of care.”

Her summary states:

Place a duty of care on AI developers in the design, development, and operation of AI platforms to prevent and mitigate foreseeable harm to users. Additionally, this section requires:

• AI platforms to conduct regular risk assessments of how algorithmic systems, engagement mechanics, and data practices contribute to psychological, physical, financial, and exploitative harms.

• The Federal Trade Commission (FTC) to promulgate rules establishing minimum reasonable safeguards.

At its core, Senator Blackburn’s AI bill tries to force tech companies to play by rules that most other industries have followed for decades: if you design a product that predictably harms people, you have a responsibility to fix it.

That idea is called “products liability.” Simply put, it means companies can’t sell dangerous products and then shrug it off when people get hurt. Sounds logical, right? Sounds like what you would expect to happen if you did the bad thing? Car makers have to worry about the famous exploding gas tanks. Toy manufacturers have to worry about choking hazards. Drug companies have to test side effects. Tobacco companies…well, you know the rest. The law doesn’t demand perfection, but it does demand reasonable care and imposes a “duty of care” on companies that put dangerous products into the stream of commerce.

Blackburn’s bill would apply that same logic to AI platforms. Yes, the special people would have to follow the same rules as everyone else with no safe harbors.

Instead of treating AI systems as abstract “speech” or neutral tools, the bill treats them as what they are: products with design choices. Those choices can foreseeably cause psychological harm, financial scams, physical danger, or exploitation. Recommendation algorithms, engagement mechanics, and data practices aren’t accidents. They’re engineered. At tremendous expense. One thing you can be sure of is that if Google’s algorithms behave a certain way, it’s not because the engineers ran out of development money. The same is true of ChatGPT, Grok, and the rest. On a certain level of reality, this is very likely not guesswork or mere predictability. It’s “known” rather than “should have known.” These people know exactly what their algorithms do. And they do it for the money.

The bill would impose that duty of care on AI developers and platform operators. A duty of care is a basic legal obligation to act reasonably to prevent foreseeable harm. “Foreseeable” doesn’t mean you can predict the exact victim or moment; it means you can anticipate the type of harm that flows to the users you target from how the system is built.

To make that duty real, the bill would require companies to conduct regular risk assessments and make them public. These aren’t PR exercises. They would have to evaluate how their algorithms, engagement loops, and data use contribute to harms like addiction, manipulation, fraud, harassment, and exploitation.

They do this already, believe it. What’s different is that they don’t make it public, any more than Ford made public the internal research showing that the Pinto’s gas tank was likely to explode. In other words, platforms would have to look honestly at what their systems actually do in the world, not just what they claim to do.

The bill also directs the Federal Trade Commission (FTC) to write rules establishing minimum reasonable safeguards. That’s important because it turns a vague obligation (“be responsible”) into enforceable standards (“here’s what you must do at a minimum”). Think of it as seatbelts and crash tests for AI systems.

So why do tech companies object? Because many of them argue that their algorithms are protected by the First Amendment—that regulating how recommendations work is regulating speech. Yes, that is a load of crap. It’s not just you, it really is BS.

Imagine Ford arguing that an exploding gas tank was “expressive conduct,” that drivers chose the Pinto to make a statement, and that safety regulation would therefore violate Ford’s free speech rights. No court would take that seriously. A gas tank is not an opinion. It’s an engineered component with known risks, risks that were known to the manufacturer.

AI platforms are the same. When harm flows from design decisions—how content is ranked, how users are nudged, how systems optimize for engagement—that’s not speech. That’s product design. You can measure it, test it, and audit it, which they do, and make it safer, which they don’t.

This part of Senator Blackburn’s bill matters because platform design shapes culture, careers, and livelihoods. Algorithms decide what gets seen, what gets buried, and what gets exploited. Blackburn’s bill doesn’t solve every problem, but it takes an important step: it says tech companies can’t hide dangerous products behind free-speech rhetoric anymore.

If you build it, and it predictably hurts people, you’re responsible for fixing it. That’s not censorship. It’s accountability. And people like Marc Andreessen, Sam Altman, Elon Musk and David Sacks will hate it.

Trump’s Historic Kowtow to Special Interests: Why Trump’s AI Executive Order Is a Threat to Musicians, States, and Democracy

There’s a new dance in Washington—it’s called the KowTow.

Most musicians don’t spend their days thinking about executive orders. But if you care about your rights, your recordings, your royalties, your community, or even the environment, you need to understand the Trump Administration’s new executive order on artificial intelligence. The order—presented as “Ensuring a National Policy Framework for AI”—is not a national standard at all. It is a blueprint for stripping states of their power, protecting Big Tech from accountability, and centralizing AI authority in the hands of unelected political operatives and venture capitalists. In other words, it’s business as usual for the special interests, led by an unelected bureaucrat, Silicon Valley Viceroy and billionaire investor David Sacks, whom the New York Times recently called out as a walking conflict of interest.

You’ll Hear “National AI Standard.” That’s Fake News. It’s Silicon Valley’s Wild West

Supporters of the EO claim Trump is “setting a national framework for AI.” Read it yourself. You won’t find a single policy on:
– AI systems stealing copyrights (already proven in court against Anthropic and Meta)
– AI systems inducing self-harm in children
– Whether Google can build a water‑burning data center or nuclear plant next to your neighborhood 

None of that is addressed. Instead, the EO orders the federal government to sue and bully states like Florida and Texas that pass AI safety laws, and threatens to cut off broadband funding unless states abandon their democratically enacted protections. They will call this “preemption,” which is when federal law overrides conflicting state laws. When Congress (or sometimes a federal agency) occupies a policy area, states lose the ability to enforce different or stricter rules. But there is no federal legislation here (EOs don’t count), so there can be no “preemption.”

Who Really Wrote This? The Sacks–Thierer Pipeline

This EO reads like it was drafted directly from the talking points of David Sacks and Adam Thierer, the two loudest voices insisting that states must be prohibited from regulating AI. It sounds that way because it was: Trump himself gave all the credit to David Sacks at his signing ceremony.

– Adam Thierer works at Google’s R Street Institute and pushes “permissionless innovation,” meaning companies should be allowed to harm the public before regulation is allowed. 
– David Sacks is a billionaire Silicon Valley investor from South Africa with hundreds of AI and crypto investments, documented by The New York Times, and stands to profit from deregulation.

Worse, the EO lards itself with references to federal agencies coordinating with the “Special Advisor for AI and Crypto,” who is—yes—David Sacks. That means DOJ, Commerce, Homeland Security, and multiple federal bodies are effectively instructed to route their AI enforcement posture through a private‑sector financier.

The Trump AI Czar—VICEROY Without Senate Confirmation

Sacks is exactly what we have been warning about for months: the unelected Trump AI Czar.

He is not Senate‑confirmed. 
He is not subject to conflict‑of‑interest vetting. 
He is a billionaire “special government employee” with vast personal financial stakes in the outcome of AI deregulation. 

Under the Constitution, you cannot assign significant executive authority to someone who never faced Senate scrutiny. Yet the EO repeatedly implies exactly that.

Even Trump’s MOST LOYAL MAGA Allies Know This Is Wrong

Trump signed the order in a closed ceremony with sycophants and tech investors—not musicians, not unions, not parents, not safety experts, not even one Red State governor.

Even political allies and activists like Mike Davis and Steve Bannon blasted the EO for gutting state powers and centralizing authority in Washington while failing to protect creators. When Bannon and Davis are warning you the order goes too far, that tells you everything you need to know. Well, almost everything.

And Then There’s Ted Cruz

On top of everything else, the one state official in the room was U.S. Senator Ted Cruz of Texas, a state that has led on AI protections for consumers. Cruz sold out Texas musicians while gutting the Constitution—knowing full well exactly what he was doing as a former Supreme Court clerk.

Why It Matters for Musicians

AI isn’t some abstract “tech issue.” It’s about who controls your work, your rights, your economic future. Right now:

– AI systems train on our recordings without consent or compensation. 
– Major tech companies use federal power to avoid accountability. 
– The EO protects Silicon Valley elites, not artists, fans or consumers. 

This EO doesn’t protect your music, your rights, or your community. It preempts local protections and hands Big Tech a federal shield.

It’s Not a National Standard — It’s a Power Grab

What’s happening isn’t leadership. It’s regulatory capture dressed as patriotism. If musicians, unions, state legislators, and everyday Americans don’t push back, this EO will become a legal weapon used to silence state protections and entrench unaccountable AI power.

What David Sacks and his band of thieves are teaching the world is the lesson they learned from Dot Bomb 1.0: the first time around, they didn’t steal enough. If you’re going to steal, steal all of it. Then the government will protect you.


The Word “If” is for Losers: Gene Simmons Nails It on American Music Fairness Act

Senator Marsha Blackburn and Gene Simmons testified today at the Senate Judiciary Committee on fixing artist pay for radio play with the American Music Fairness Act—both knocked it out of the park. As Senator Blackburn said very clearly, corporate radio wants to be treated as a special and protected class for no good reason. As she said, the creative community has waited a very long time for fair treatment.

Now Gene Simmons…lived up to his billing. Absolutely charming, and emphatic about driving AMFA through the tape. It has to be seen to be believed: absolutely electrifying. As he said, artists need radio and radio needs artists. “Let’s get with it,” NAB.

You can watch the entire hearing above, or see Gene’s written testimony and highlight reel below.

Gene Simmons and the American Music Fairness Act

Gene Simmons is receiving Kennedy Center Honors with KISS this Sunday, and is also bringing his voice to the fair pay for radio play campaign to pass the American Music Fairness Act (AMFA).

Gene will testify on AMFA next week before the Senate Judiciary Committee. He won’t just be speaking as a member of KISS or as one of the most recognizable performers in American music. He’ll be showing up as a witness to something far more universal: the decades-long exploitation of recording artists whose work powers an entire broadcast industry that has never paid them a dime. Watch the hearing on December 9th at 3pm ET at this link, when Gene testifies alongside SoundExchange CEO Mike Huppe.

As Gene argued in his Washington Post op-ed, the AM/FM radio loophole is not a quirky relic; it is legalized taking. Everyone else pays for music: streaming services, satellite radio, social-media platforms, retail, fitness, gaming. Everyone except big broadcast radio, which generated more than $13 billion in advertising revenue last year while paying zero to the performers whose recordings attract those audiences.

Gene is testifying not just for legacy acts, but for the “thousands of present and future American recording artists” who, like KISS in the early days, were told to work hard, build a fan base, and just be grateful for airplay. As he might put it, artists were expected to “rock and roll all night” — but never expect to be paid for it on the radio.

And when artists asked for change, they were told to wait. They “keep on shoutin’,” decade after decade, but Congress never listened.

That’s why this hearing matters. It’s the first Senate-level engagement with the issue since 2009. The ground is shifting. Gene Simmons’ presence signals something bigger: artists are done pretending that “exposure” is a form of compensation.

AMFA would finally require AM/FM broadcasters to pay for the sound recordings they exploit, the same way every other democratic nation already does. It would give session musicians, backup vocalists, and countless independent artists a revenue stream they should have had all along. It would even unlock international royalties currently withheld from American performers because the U.S. refuses reciprocity.

And let’s be honest: Gene Simmons is an ideal messenger. He built KISS from nothing, understands the grind, and knows exactly how many hands touch a recording before it reaches the airwaves. His testimony exposes the truth: radio isn’t “free promotion” — it’s a commercial business built on someone else’s work.

Simmons once paraphrased the music economy as a game where artists are expected to give endlessly while massive corporations act like the only “god of thunder,” taking everything and returning nothing. AMFA is an overdue correction to that imbalance.

When Gene sits down before the Senate Judiciary Committee, he won’t be wearing the makeup. He won’t need to. He’ll be carrying something far more powerful: the voices of artists who’ve waited 80 years for Congress to finally turn the volume up on fairness.

NYT: Silicon Valley’s Man in the White House Is Benefiting Himself and His Friends


The New York Times published a sprawling investigation into David Sacks’s role as Trump’s A.I. and crypto czar. We’ve talked about David Sacks a few times on these pages. The Times’ piece is remarkable in scope and reporting: a venture capitalist inside the White House, steering chip policy, promoting deregulation, raising money for Trump, hosting administration events through his own podcast brand, and retaining hundreds of A.I. and crypto investments that stand to benefit from his policy work.

But for all its detail, the Times buried the lede.

The bigger story isn’t just ethics violations or outright financial corruption. It’s that Sacks is simultaneously shaping and shielding the largest regulatory power grab in history: the A.I. moratorium and its preemption structure.

Of all the corrupt anecdotes in the New York Times’ must-read article regarding Viceroy and leading Presidential pardon candidate David Sacks, they left out the whole AI moratorium scam, focusing instead on the more garden-variety self-dealing and outright conflicts of interest, which are legion. My bet is that Mr. Sacks reeks so badly that it is hard to know what to leave out.

There is a deeper danger that the Times story never addresses: the long-term damage that will outlive David Sacks himself. Even if Sacks eventually faces investigations or prosecution for unrelated financial or securities matters — if he does — the real threat isn’t what happens to him. It’s what happens to the legal architecture he is building right now.


If he succeeds in blocking state-law prosecutions and freezing A.I. liability for a decade, the harms won’t stop when he leaves office. They will metastasize.

Without state enforcement, A.I. companies will face no meaningful accountability for:

  • child suicide induced by unregulated synthetic content
  • mass copyright theft embedded into permanent model weights
  • biometric and voiceprint extraction without consent
  • data-center sprawl that overwhelms local water, energy, and zoning systems
  • surveillance architectures exported globally
  • algorithmic harms that cannot be litigated under preempted state laws

These harms don’t sunset when an administration ends. They calcify. It must also be said that Sacks could face state securities-law liability — including fraud, undisclosed self-dealing, and market-manipulative conflicts tied to his A.I. portfolio — because state blue-sky statutes impose duties that can be stricter than federal law. The A.I. moratorium’s preemption would vaporize those claims, shielding exactly the conduct state regulators are best positioned to police. No wonder he’s so committed to sneaking it into federal law.

The moratorium Sacks is pushing would prevent states from acting at the very moment when they are the only entities with the political will and proximity to regulate A.I. on the ground. If he succeeds, the damage will last long after Sacks has left his government role — long after his podcast fades, long after his investment portfolio exits, long after any legal consequences he might face.

The public will be living inside the system he designed.

There is one final point the public needs to understand. David Sacks is not an anomaly. Sacks is to Trump what Eric Schmidt was to Biden: the industry’s designated emissary, embedded inside the White House to shape federal technology policy from the inside out. The party labels swap and the personnel change, but the structural function remains the same. Remember, Schmidt bragged about writing the Biden AI executive order.

[Image caption: “Of all the CEOs Google interviewed, Eric Schmidt was the only one that had been to Burning Man, which was a major plus.”]

So don’t think that if Sacks is pushed out, investigated, discredited, or even prosecuted one day — if he is — the problem disappears. You don’t eliminate regulatory capture by removing the latest avatar of it. The next administration will simply install a different billionaire with a different portfolio and the same incentives: protect industry, weaken oversight, preempt the states, and expand the commercial reach of the companies they came in with.

The danger is not David Sacks the individual. The danger is the revolving door that lets tech titans write national A.I. policy while holding the assets that benefit from it. As much as Trump complains of the “deep state,” he’s doing his best to create the deepest of deep states.

Until that underlying structure changes, it won’t matter whether it’s Sacks, Schmidt, Thiel, Musk, Palihapitiya, or the next “technocratic savior.”

The system will keep producing them — and the public will keep paying the price. For as Sophocles taught us, it is not in our power to escape the curse.

@ArtistRights Institute Newsletter 11/17/25: Highlights from a fast-moving week in music policy, AI oversight, and artist advocacy.

American Music Fairness Act

Don’t Let Congress Reward the Stations That Don’t Pay Artists (Editor Charlie/Artist Rights Watch)

Trump AI Executive Order

White House drafts order directing Justice Department to sue states that pass AI regulations (Gerrit De Vynck and Nitasha Tiku/Washington Post)

DOJ Authority and the “Because China” Trump AI Executive Order (Chris Castle/MusicTech.Solutions)

THE @DAVIDSACKS/ADAM THIERER EXECUTIVE ORDER CRUSHING PROTECTIVE STATE LAWS ON AI—AND WHY NO ONE SHOULD BE SURPRISED THAT TRUMP TOOK THE BAIT

Bartz Settlement

WHAT $1.5 BILLION GETS YOU: AN OBJECTOR’S GUIDE TO THE BARTZ SETTLEMENT (Chris Castle/MusicTechPolicy)

Ticketing

StubHub’s First Earnings Faceplant: Why the Ticket Reseller Probably Should Have Stayed Private (Chris Castle/ArtistRightsWatch)

The UK Finally Moves to Ban Above-Face-Value Ticket Resale (Chris Castle/MusicTech.Solutions)

Ashley King: Oasis Praises Victoria’s Strict Anti-Scalping Laws While on Tour in Oz — “We Can Stop Large-Scale Scalping In Its Tracks” (Artist Rights Watch/Digital Music News)

NMPA/Spotify Video Deal

GUEST POST: SHOW US THE TERMS: IMPLICATIONS OF THE SPOTIFY/NMPA DIRECT AUDIOVISUAL LICENSE FOR INDEPENDENT SONGWRITERS (Gwen Seale/MusicTechPolicy)

WHAT WE KNOW—AND DON’T KNOW—ABOUT SPOTIFY AND NMPA’S “OPT-IN” AUDIOVISUAL DEAL (Chris Castle/MusicTechPolicy)